Simultaneous localization and mapping (SLAM) frameworks for autonomous navigation rely on robust data association to identify loop closures for back-end trajectory optimization. For autonomous underwater vehicles (AUVs) equipped with multibeam echosounders (MBE), data association is particularly challenging due to the scarcity of identifiable landmarks on the seabed and the low-resolution nature of MBE data. Deep learning solutions for loop closure detection have shown excellent performance on data from more structured environments. However, their transfer to the seabed domain is not straightforward, and efforts to port them are hindered by the lack of bathymetric datasets. Thus, in this paper we propose a neural network architecture aimed at showing the potential of adapting such techniques to correspondence matching in bathymetric data. We train our framework on data from AUV missions and evaluate its performance on the tasks of loop closure detection and coarse point-cloud alignment. Finally, we show its potential against a more traditional method and release both our implementation and the dataset used.
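A minimal sketch of the coarse point-cloud alignment step mentioned above, assuming matched keypoint pairs have already been produced by a learned correspondence network (this is the standard Kabsch/SVD solver, not the authors' released implementation):

```python
import numpy as np

def rigid_transform_from_correspondences(src, dst):
    """Estimate rotation R and translation t aligning src to dst
    (both (N, 3) arrays of matched points) via the Kabsch/SVD method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```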
Recent advances in differentiable rendering, which make it possible to compute the gradients of 2D pixel values with respect to 3D object models, allow estimating model parameters through gradient-based optimization with only 2D supervision. It is easy to incorporate deep neural networks into such an optimization pipeline, making it possible to leverage deep learning techniques. This also greatly reduces the requirement for collecting and annotating 3D data, which can be very difficult for applications such as reconstructing geometry from 2D sensors. In this work, we propose a differentiable renderer for sidescan sonar images. We further demonstrate its ability to solve the inverse problem of directly reconstructing a 3D seafloor mesh from only 2D sidescan sonar data.
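To make the inverse problem concrete, the sketch below shows a generic gradient-based reconstruction loop, assuming a hypothetical differentiable `render_sidescan` function standing in for the paper's renderer:

```python
import torch

def reconstruct_heightmap(observed, render_sidescan, shape=(256, 256),
                          steps=500, lr=1e-2):
    """Recover seafloor heights from 2D sidescan images by gradient descent
    through a differentiable renderer. `render_sidescan(heights)` is assumed
    to map a height grid to a predicted sidescan image."""
    heights = torch.zeros(shape, requires_grad=True)  # flat seafloor init
    opt = torch.optim.Adam([heights], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        predicted = render_sidescan(heights)
        loss = torch.nn.functional.mse_loss(predicted, observed)
        loss.backward()                 # gradients flow through the renderer
        opt.step()
    return heights.detach()
```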
Sidescan sonar intensities encode information about changes in the surface normal of the seabed. However, other factors, such as the geometry of the seabed and its material composition, also affect the backscatter intensities. One can model these intensity variations in the forward direction, from the bathymetric map and physical properties to the measured intensities, or one can use an inverse model that starts from the intensities and infers the surface normals. Here we use an inverse model that leverages deep learning's ability to learn from data: a convolutional neural network is used to estimate the surface normals from sidescan. The internal properties of the seabed are thus only learned implicitly. Once this information is estimated, a bathymetric map can be reconstructed through an optimization framework that also includes altimeter readings, which provide a sparse depth profile as a constraint. Implicit neural representation learning was recently proposed to represent the bathymetric map in such an optimization framework. In this article, we use a neural network to represent the map and optimize it under the constraints of the altimeter points and the surface normals estimated from sidescan. By fusing multiple observations from different angles over several sidescan lines, the estimated results are improved through the optimization. We demonstrate the efficiency and scalability of the approach by reconstructing high-quality bathymetry from the sidescan data of a large survey. We compare the proposed data-driven inverse-model approach with modeling the sidescan formation using a forward Lambertian model. We assess the quality of each reconstruction by comparing it against data constructed from a multibeam sensor. We are hence able to discuss the strengths and weaknesses of each approach.
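A hedged sketch of the optimization framework described above, with the bathymetric map represented implicitly by an MLP whose surface normals come from automatic differentiation; the network size and loss weight are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn

depth_net = nn.Sequential(            # implicit representation: (x, y) -> depth
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

def normals_from_depth(xy):
    """Surface normals of the depth field via autograd gradients."""
    xy = xy.requires_grad_(True)
    d = depth_net(xy)
    (grad,) = torch.autograd.grad(d.sum(), xy, create_graph=True)
    n = torch.cat([-grad, torch.ones(len(xy), 1)], dim=1)
    return n / n.norm(dim=1, keepdim=True)

def loss(alti_xy, alti_depth, sss_xy, sss_normals, w=0.1):
    """Sparse altimeter depths as data term; sidescan-estimated
    normals (from the CNN) as a dense regularizing term."""
    data = (depth_net(alti_xy).squeeze(-1) - alti_depth).pow(2).mean()
    normal = (normals_from_depth(sss_xy) - sss_normals).pow(2).sum(1).mean()
    return data + w * normal
```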
We propose a novel data-driven approach for reconstructing high-resolution bathymetry from sidescan. Sidescan sonar (SSS) intensity as a function of range does contain some information about the slope of the seabed. However, this information must be inferred. In addition, the navigation system provides an estimated trajectory, and the altitude along that trajectory is usually also available. From these we obtain a very coarse seabed bathymetry as input. It is then combined with the indirect but high-resolution seabed information from the sidescan to estimate the full bathymetry. The sparse depths could be acquired by a single-beam echosounder, a Doppler velocity log (DVL), other bottom-tracking sensors, or a bottom-tracking algorithm applied to the sidescan itself. In our work, a fully convolutional network is used to estimate the depth contour and its uncertainty from the sidescan image and the sparse depths in an end-to-end fashion. The estimated depth is then used together with the range to compute the 3D positions of points on the seabed. A high-quality bathymetric map can be reconstructed after fusing the depth predictions and the corresponding confidence measures from the neural network. We show the improvement in the bathymetric map gained by fusing the sidescan with the sparse depths, compared with using the sparse depths alone. We also show the benefit of confidence weighting when fusing multiple bathymetric estimates into a single map.
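The confidence-weighted fusion step could, for instance, be realized as inverse-variance weighting into a bathymetric grid; the sketch below is one standard formulation and may differ from the paper's exact scheme:

```python
import numpy as np

def fuse_depths(cells, depths, variances, grid_size):
    """Fuse per-point depth predictions into a bathymetric grid.
    cells: (N,) flat grid-cell index per 3D point; depths/variances: (N,)."""
    w = 1.0 / variances                     # confidence = inverse variance
    num = np.bincount(cells, weights=w * depths, minlength=grid_size)
    den = np.bincount(cells, weights=w, minlength=grid_size)
    fused = np.full(grid_size, np.nan)      # NaN where no observations land
    hit = den > 0
    fused[hit] = num[hit] / den[hit]        # weighted mean per cell
    return fused
```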
Embedding real-world networks poses a challenge because it is not clear how to identify their latent geometries. Embedding disassortative networks, such as scale-free networks, into Euclidean space has been shown to incur distortions. Embedding scale-free networks into hyperbolic space offers an exciting alternative, but incurs distortions when the latent geometry of a network differs from that of the embedding space. We propose an inductive model that leverages the expressiveness of GCNs and trivial bundles to learn inductive node representations for networks with or without node features. A trivial bundle is a simple case of a fiber bundle, a space that is globally a product space of its base space and fiber. The coordinates of the base space and those of the fiber can be used to express the assortative and disassortative factors that generate edges. The model is therefore able to learn embeddings that can express these factors. In practice, it reduces errors on link prediction and node classification compared with Euclidean and hyperbolic GCNs.
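As a loose illustration (the paper's decoder is not specified here), a trivial-bundle embedding could score candidate edges by combining distances in the base-space and fiber coordinates; every name below is hypothetical:

```python
import torch

def edge_score(z_base_u, z_base_v, z_fib_u, z_fib_v, a=1.0, b=1.0):
    """Logit for an edge (u, v) in a product space: the base-space term can
    capture assortative structure, the fiber term disassortative structure."""
    d_base = (z_base_u - z_base_v).pow(2).sum(-1)   # base-space distance
    d_fib = (z_fib_u - z_fib_v).pow(2).sum(-1)      # fiber distance
    return -(a * d_base + b * d_fib)                # higher = more likely edge

# Usage: probability of an edge via a logistic link
# p_edge = torch.sigmoid(edge_score(zb[u], zb[v], zf[u], zf[v]))
```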
Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data.
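For reference, the GRN layer described above amounts to global per-channel L2 pooling, divisive normalization across channels, and feature recalibration with a learnable affine plus a residual; this sketch assumes channels-last (N, H, W, C) feature maps:

```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization as described for ConvNeXt V2:
    aggregate a global per-channel L2 norm, normalize it across channels,
    and use it to recalibrate features, with a learnable affine + residual."""
    def __init__(self, dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x):                                       # x: (N, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)       # global pooling
        nx = gx / (gx.mean(dim=-1, keepdim=True) + 1e-6)        # divisive norm
        return self.gamma * (x * nx) + self.beta + x
```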
A step-search sequential quadratic programming method is proposed for solving nonlinear equality constrained stochastic optimization problems. It is assumed that constraint function values and derivatives are available, but only stochastic approximations of the objective function and its associated derivatives can be computed via inexact probabilistic zeroth- and first-order oracles. Under reasonable assumptions, a high-probability bound on the iteration complexity of the algorithm to approximate first-order stationarity is derived. Numerical results on standard nonlinear optimization test problems illustrate the advantages and limitations of our proposed method.
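For context, a stochastic SQP step of this type is commonly computed from a quadratic model of the objective subject to linearized constraints; a standard formulation (the paper's subproblem may carry additional safeguards) is

```latex
\min_{d \in \mathbb{R}^n} \; \bar{g}_k^\top d + \tfrac{1}{2}\, d^\top H_k d
\quad \text{s.t.} \quad c(x_k) + \nabla c(x_k)^\top d = 0,
```

where $\bar{g}_k$ is the stochastic gradient estimate returned by the first-order oracle and $H_k$ is a positive-definite model Hessian.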
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency. We attribute this to the insufficient utilization of training signals. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjointness regulation, raising the number of tokens used for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training-efficiency improvement, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear-probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks like semantic segmentation and object detection, our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
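The disjoint-masking idea can be sketched as sampling several masked views per image whose masked token sets never overlap, so more tokens per image receive reconstruction supervision; the view count and masking ratio below are illustrative:

```python
import torch

def disjoint_masks(num_tokens, mask_ratio=0.5, num_views=2):
    """Return boolean masks of shape (num_views, num_tokens), True = masked.
    Views are pairwise disjoint, so together they cover more tokens."""
    n_mask = int(num_tokens * mask_ratio)
    assert num_views * n_mask <= num_tokens, "views would overlap"
    perm = torch.randperm(num_tokens)           # one shuffle shared by views
    masks = torch.zeros(num_views, num_tokens, dtype=torch.bool)
    for v in range(num_views):
        masks[v, perm[v * n_mask:(v + 1) * n_mask]] = True
    return masks
```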
Considering computation complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for realizing lightweight models through the synergy of quantization and distillation. The training process of the quantized model is guided by its full-precision counterpart, which saves time and cost without requiring a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module to obtain the optimal bit width automatically under a constrained condition, where a threshold on the distribution distance between the center and samples is applied in the weight-value search space. Third, in order to improve information transfer, we propose a one-to-one self-teaching (OST) module that gives the student network an ability of self-judgment. A switch control machine (SCM) builds a bridge between the student network and the teacher network at the same location, helping the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. Its tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) count (<2158 G), compared with any remote-sensing, lightweight, or distillation-based algorithm, demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
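A hedged sketch of the quantization-distillation synergy at the heart of the framework, using plain uniform fake-quantization and a soft-target distillation loss from the full-precision guide model (the actual GQSD, HQ, and OST modules are more elaborate):

```python
import torch
import torch.nn.functional as F

def quantize_dequantize(w, bits):
    """Uniform symmetric fake-quantization of a weight tensor to `bits`."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def distill_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-target KL divergence from
    the full-precision teacher that guides the quantized student."""
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```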
Automatic font generation without human experts is a practical and significant problem, especially for languages that consist of a large number of characters. Existing methods for font generation are often based on supervised learning. They require large amounts of paired data, which are labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they often define style as a set of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and employs the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of the FDSC are fed into a mixer to generate the final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by understanding the similarities and dissimilarities of fonts. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to the adversarial loss, two further reconstruction losses are adopted to constrain the domain-invariant characteristics between generated images and content images. Taking advantage of the FDSC and the adopted loss functions, our model is able to maintain spatial information and generate high-quality character images in an unsupervised manner. Experiments demonstrate that our model generates character images of higher quality than state-of-the-art methods.
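A hedged sketch of a feature deformation skip connection in the spirit of FDSC: a small convolution predicts displacement (offset) maps, which drive a deformable convolution over low-level content features; channel counts and kernel size here are illustrative:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class FDSCSketch(nn.Module):
    """Predict displacement maps from content features, then apply
    deformable convolution to the low-level content feature maps."""
    def __init__(self, channels, k=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel sample location
        self.offset_pred = nn.Conv2d(channels, 2 * k * k, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)

    def forward(self, content_feat):
        offsets = self.offset_pred(content_feat)     # (N, 2*k*k, H, W)
        return self.deform(content_feat, offsets)    # deformed skip features
```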